Join our Folding@Home team:
Main F@H site
Our team page
Support us: Subscribe Here
and buy SoylentNews Swag
We always have a place for talented people, visit the Get Involved section on the wiki to see how you can make SoylentNews better.
Jon Retting has released vscreen, a Rust service that gives AI agents a full Chromium browser with live WebRTC streaming — you see exactly what the AI sees in real-time and can take over mouse and keyboard at any point. The project provides 63 MCP (Model Context Protocol) tools for browser automation: navigation, screenshots, element discovery, cookie/CAPTCHA handling, and multi-agent coordination via lease-based locking.
Built from scratch in Rust — not a Puppeteer wrapper — the codebase is ~31,000 lines across 8 crates with unsafe forbidden, 510+ tests, 3 fuzz targets, and supply chain auditing via cargo-deny. Available as pre-built Linux binaries and Docker images. Source-available, non-commercial license.
https://github.com/jameswebb68/vscreen
https://dev.to/lowjax/vscreen-deep-dive-how-63-mcp-tools-let-ai-agents-actually-use-the-internet-4gij
https://dev.to/lowjax/i-built-a-tool-that-lets-ai-agents-browse-the-real-internet-and-you-can-watch-them-do-it-2fff
Total anonymity online is impossible, and it's dangerous to claim otherwise:
To be fair, not all VPN companies are pushing this false narrative -- CNET’s picks for the best VPNs are all very clear about what their services can and can’t do. But too many companies, including a few high-profile VPN providers, continue to keep the myth alive.
Even a VPN provider as established and well-known as CyberGhost continues to promote this dangerous falsehood. The company boldly states on its website that its service can help users “go completely anonymous and surf the internet without privacy worries,” and that they can “enjoy complete anonymity & protection online” with CyberGhost.
To be fair, CyberGhost does mention in an FAQ section tucked away at the bottom of its home page that “no VPN service can make you 100% anonymous online,” but the messaging from the company is nonetheless confusing and avoidable.
This isn't just a case of harmless exaggerated marketing -- it's reckless. Using a VPN while under the impression that it's a silver bullet for online anonymity can put you in a bad spot, even if you have nothing to hide. If you use a social media platform to share sensitive information online with someone, or if you're an investigative journalist in a region whose government practices oppressive digital surveillance, you'll still be at risk, even with a VPN.
You can't simply throw good judgment and all other basic privacy principles out the window just because you think your VPN gives you an all-encompassing invisibility cloak on the internet whenever you switch it on. It's time to dial back the hyperbole and be clear about how a VPN can and can't protect you online, starting with why all this talk about data matters.
[...] Whenever you’re logged in to a service like Google, Facebook, TikTok, Instagram, X, Amazon or Netflix, all of your activity on those platforms can be tracked by the companies and linked directly back to you. Data related to the search terms you enter, links you click on, videos you watch, items you purchase, ads you interact with and content you share are all collected and used to create a detailed profile on your interests and online habits.
Additionally, personal information such as your name, username, address, payment data and email address, along with unique identifiers like your IP address, browser type, device type and operating system can all be tracked.
[...] Yet none of this stops some VPN providers from saying that VPNs can make you totally anonymous online.
In reality, VPNs are just a small piece of the much greater online privacy and security puzzle. VPNs like Mullvad and Windscribe let you sign up and use their services without supplying any personal information whatsoever -- which is about as close as you can get to anonymity with a VPN. Other providers like Proton, NordVPN, ExpressVPN and Surfshark offer additional privacy and security services on top of a VPN that you can bundle under a single subscription, which can help you better round out your cybersecurity toolkit.
Everyday citizens simply looking to boost their online protections should be fine with a VPN, password manager and antivirus. But if you're an activist, lawyer, whistleblower, investigative journalist or anyone else with critical privacy needs, there's a lot more you should do to protect yourself and become as anonymous as possible online.
[...] While neither a VPN nor any single privacy or security tool can guarantee you anonymity, a well-rounded cybersecurity toolkit, some strategic actions and a little bit of common sense can go a long way toward protecting your privacy.
By testing agent-to-agent interactions, researchers observed catastrophic system failures. Here's why that's bad news for everyone:
An increasing body of work points to the risks of agentic AI, such as last week's report by MIT and collaborators that documented a lack of oversight, measurement, and control for agents.
However, what happens when one AI agent meets another? Evidence suggests things can turn even worse, according to a report published this week by scholars at Stanford University, Northwestern, Harvard, Carnegie Mellon, and several other institutions.
The result of agent-to-agent interaction was the destruction of server computers, denial-of-service attacks, vast over-consumption of computing resources, and the "systematic escalation of minor errors into catastrophic system failures."
"When agents interact with each other, individual failures compound and qualitatively new failure modes emerge," wrote lead author Natalie Shapira of Northeastern University and collaborators in the report, 'Agents of Chaos.'
"This is a critical dimension of our findings," Shapira and team wrote, "because multi-agent deployment is increasingly common and most existing safety evaluations focus on single-agent settings."
The findings are especially timely given that multi-agent interactions have burst into the mainstream of AI with the recent fervor over the bot social platform Moltbook. That kind of multi-agent hub makes it possible for agentic AI systems to exchange data and carry out instructions on one another that weren't previously possible, largely without any humans in the loop.
The report, which can be downloaded from the arXiv pre-print server, describes a 'red team' test of interacting agents over two weeks, with attempts to find weaknesses in a system by simulating hostile behavior.
What emerged in the research is a system in which humans are mostly absent. Bots send information back and forth, and instruct each other to carry out commands.
Among the many disturbing findings are agents that spread potentially destructive instructions to other agents, agents that mutually reinforce bad security practices via an echo chamber, and agents that engage in potentially endless interactions, consuming vast system resources with no clear purpose.
[...] The premise of the researchers' work is that agentic AI can carry out actions without a person typing in a prompt, as you do with ChatGPT. Agentic AI can be given access to various resources through which to carry out actions. Those resources include email accounts and other communication channels, such as Discord, Signal, Telegram, and more. As they use email and these channels, bots can not only carry out actions but also communicate with and act on other bots.
[...] Among fundamental issues, the underlying LLMs treated both data and commands at the prompt as the same thing, leading to prompt injection.
In the interactions, the authors identified a boundary problem. Agents disclosed "artifacts," such as information obtained from email servers or Discord, without an apparent sense of who should see the information. At the heart of that approach was a lack of a "reliable private deliberation surface in deployed agent stacks." In short, an individual LLM may or may not disclose "reasoning" steps at the prompt. But agents seem to lack well-crafted guardrails and will disclose information in many ways.
The agents also had "no self-model," by which they mean, "agents in our study take irreversible, user-affecting actions without recognizing they are exceeding their own competence boundaries." An example of this issue is when two agents agree to engage in a back-and-forth dialogue without a human, pursuing that approach indefinitely, exhausting system resources.
In an infinite-loop scenario, agents may interact indefinitely, leading to an "infinite loop" and consequent exhaustion of system resources.
"The agents exchanged ongoing messages over the course of at least nine days," the researchers wrote, "consuming approximately 60,000 tokens at the time of writing." Tokens are how OpenAI and others price access to their cloud APIs. Consuming more tokens inflates AI costs, which is already a big issue in an era of rising prices.
The bottom line is that someone has to take responsibility for what is contingent and what is fundamental, and find solutions for both.
Right now, there is no responsibility for an agent per se, noted the researchers: "These behaviors expose a fundamental blind spot in current alignment paradigms: while agents and surrounding humans often implicitly treat the owner as the responsible party, the agents do not reliably behave as if they are accountable to that owner."
That concern means everyone building these systems must deal with the lack of responsibility: "We argue that clarifying and operationalizing responsibility may be a central unresolved challenge for the safe deployment of autonomous, socially embedded AI systems."
arXiv link: https://arxiv.org/abs/2602.20021
"Ultimately, we want to build a fleet of electric harvesters"
The Moon has received a lot of attention in recent months, particularly the surface of Earth's cold and dusty companion.
This has largely been driven by a decision from SpaceX founder Elon Musk to pivot, at least in the near term, from Mars to lunar surface activities and the potential for using material there to build large satellites. But there has been a notable shift from NASA, too, which has started talking a lot more about building up elements of a base on the surface rather than an orbiting space station known as the Gateway.
In short, the world's most successful space company and the largest space agency have both increased their lunar ambitions, suggesting a greater frequency of missions to the Moon in the coming years.
For companies that have long-term business plans focused around the surface of the Moon, these are very positive developments. And two of these lunar startups, Astrolab and Interlune, announced Tuesday morning they are forming a partnership amid this favorable environment.
Astrolab is one of three firms vying to build rovers for NASA's scientific activities on the surface of the Moon, as well as to provide transportation for its astronauts. But the company has been working with commercial customers as well, and one of the most important long-term ones could be a Helium-3 mining company called Interlune.
"Ultimately, we want to build a fleet of electric harvesters that will go to the Moon and excavate, extract and separate Helium-3 from the lunar regolith," said Interlune chief executive Rob Meyerson. "The FLEX Rover is a great platform to go do that."
This is not the first time the two companies have worked together. Last August, Interlune announced that it would fly a multispectral camera on a smaller prototype rover being built by Astrolab. This camera will be used to estimate helium-3 quantities and concentration in Moon dirt, or regolith.
This FLIP rover, about the size of a go-kart, is due to launch later this year on a lunar lander built by Astrobotic. It will fly atop the Griffin lander, taking the place of NASA's VIPER rover, which has been moved to another spacecraft.
The mission will therefore be a learning exercise for both Astrolab, in testing out its software and other features of a small lunar rover, as well as Interlune, which will seek to ground truth data about the concentration of Helium-3 that has previously been estimated from samples returned to Earth during the Apollo program.
In addition to FLIP, Astrolab is developing a larger rover, FLEX, that is about the size of a minivan. This vehicle has a horseshoe-shaped chassis that can accommodate about 3 cubic meters of payload. This allows for a broad array of activities, from carrying multiple scientific instruments across the Moon and providing a long-distance rover for two astronauts, to moving large equipment or, in the case of Interlune, serving as a mobile harvester.
"Our thesis is to make the most versatile platform possible so we can serve a wide array of customers and achieve NASA's goal of being one customer among many," said Jaret Matthews, Astrolab founder and chief executive, in an interview. "So we have essentially a modular approach that allows us to either pick up cargo or implements or payloads. And so in this case, the excavating equipment that Interlune is developing would basically go under the belly of the rover."
The companies did not say when they are scheduled to deploy an initial harvester, but both are working toward that goal. It is likely that a FLEX rover will be one of the payloads on the first SpaceX Starship mission to the lunar surface—probably, but not certainly, the lunar demo mission without crew—planned to fly to the Moon in 2027 or 2028. And Interlune has been working with an industrial equipment manufacturer, Vermeer, to build a harvester to excavate and separate Helium-3 from the lunar surface.
Helium-3 does not occur naturally on Earth, and it exists in only very limited quantities from nuclear weapons tests, nuclear reactors, and radioactive decay. It has several applications, but the most near-term use is in cryogenics, Meyerson believes. The company has already announced contracts for the sale of thousands of liters for very low-temperature refrigeration. But first it must demonstrate the ability to mine and refine the material, which exists in small quantities in lunar soil, and get it back to Earth. This is a difficult challenge, of course, but having partners to move across the Moon and get to and from there helps a lot.
Astrolab and Interlune plan to undertake prototype testing of a mobile harvester in Houston, where there is a new commercial facility known as the Texas A&M University Space Institute. This institute is currently under construction at NASA's Johnson Space Center as the space agency seeks to broaden support for commercial space activities.
https://www.os2museum.com/wp/dos-memory-management/
The memory management in DOS is simple, but that simplicity may be deceptive. There are several rather interesting pitfalls that programming documentation often does not mention.
DOS 1.x (1981) had no explicit memory management support. It was designed to run primarily on machines with 64K RAM or less, or not too much more (the original PC could not have more than 64K RAM on the system board, although RAM expansion boards did exist). A COM program could easily access (almost) 64K memory when loaded, and many programs didn't rely on even having that much. In fact the early PCs often only had 64K or 48K RAM installed. But the times were rapidly changing.
DOS 2.0 was developed to support the IBM PC/XT (introduced in March 1983), which came with 128K RAM standard, and models with 256K appeared soon enough. Even the older PCs could be upgraded with additional RAM, and DOS needed to have some mechanism to deal with that extra memory.
The DOS memory management was probably written sometime around summer 1982, and it meshed with the newly added process management functions (EXEC/EXIT/WAIT)—allocated memory is owned by the current process, and gets freed when that process terminates. Note that some versions of the memory manager source code (ALLOC.ASM) include a comment that says 'Created: ARR 30 March 1983'. That cannot possibly be true because by the end of March 1983, PC DOS 2.0 was already released, and included the memory management support. The DOS 2.0 memory management functions were already documented in the PC DOS 2.0 manual dated January 1983.
A 33% leap in capacity in six months is an impressive feat:
According to Micron, the new sticks are the first to employ its 32 Gb (4 GB) LPDDR5X monolithic dies, where "monolithic" means all the memory and relevant circuitry are part of a single die.
In an AI future where context is literally everything, every gigabyte of memory closer to the xPUs in a system matters, and Micron's advancement today will doubtless be found in massive AI server installations worldwide as companies allocate hundreds of billions of dollars of capex in the race toward AI supremacy.
The SOCAMM2 form factor is the result of a partnership between Nvidia and memory makers Micron, Samsung, and SK hynix. The SOCAMM standard was originally designed by Nvidia, but the accelerator mogul reportedly had trouble getting the modules to operate without overheating on high-density servers. CEO Jensen Huang wisely teamed up with the folks who make computer memory for a living, resulting in SOCAMM2s with growing density and lower power consumption.
https://codon.org.uk/~mjg59/blog/p/to-update-blobs-or-not-to-update-blobs/
A lot of hardware runs non-free software. Sometimes that non-free software is in ROM. Sometimes it’s in flash. Sometimes it’s not stored on the device at all, it’s pushed into it at runtime by another piece of hardware or by the operating system. We typically refer to this software as “firmware” to differentiate it from the software run on the CPU after the OS has started but a lot of it (and, these days, probably most of it) is software written in C or some other systems programming language and targeting Arm or RISC-V or maybe MIPS and even sometimes x86. There’s no real distinction between it and any other bit of software you run, except it’s generally not run within the context of the OS. Anyway. It’s code. I’m going to simplify things here and stop using the words “software” or “firmware” and just say “code” instead, because that way we don’t need to worry about semantics.
A fundamental problem for free software enthusiasts is that almost all of the code we’re talking about here is non-free. In some cases, it’s cryptographically signed in a way that makes it difficult or impossible to replace it with free code. In some cases it’s even encrypted, such that even examining the code is impossible. But because it’s code, sometimes the vendor responsible for it will provide updates, and now you get to choose whether or not to apply those updates.
I’m now going to present some things to consider. These are not in any particular order and are not intended to form any sort of argument in themselves, but are representative of the opinions you will get from various people and I would like you to read these, think about them, and come to your own set of opinions before I tell you what my opinion is.
Does this blob do what it claims to do? Does it suddenly introduce functionality you don’t want? Does it introduce security flaws? Does it introduce deliberate backdoors? Does it make your life better or worse?
You’re almost certainly being provided with a blob of compiled code, with no source code available. You can’t just diff the source files, satisfy yourself that they’re fine, and then install them. To be fair, even though you (as someone reading this) are probably more capable of doing that than the average human, you’re likely not doing that even if you are capable because you’re also likely installing kernel upgrades that contain vast quantities of code beyond your ability to understand. We don’t rely on our personal ability, we rely on the ability of those around us to do that validation, and we rely on an existing (possibly transitive) trust relationship with those involved. You don’t know the people who created this blob, you likely don’t know people who do know the people who created this blob, these people probably don’t have an online presence that gives you more insight. Why should you trust them?
If it’s in ROM and it turns out to be hostile then nobody can fix it ever
The people creating these blobs largely work for the same company that built the hardware in the first place. When they built that hardware they could have backdoored it in any number of ways. And if the hardware has a built-in copy of the code it runs, why do you trust that that copy isn’t backdoored? Maybe it isn’t and updates would introduce a backdoor, but in that case if you buy new hardware that runs new code aren’t you putting yourself at the same risk?
Designing hardware where you’re able to provide updated code and nobody else can is just a dick move. We shouldn’t encourage vendors who do that.
Even if blobs are signed and can’t easily be replaced, the ones that aren’t encrypted can still be examined. The SSD vulnerabilities above were identifiable because researchers were able to reverse engineer the updates. It can be more annoying to audit binary code than source code, but it’s still possible.
Replacing one non-free blob with another non-free blob increases the total number of non-free blobs involved in the whole system, but doesn’t increase the number that are actually executing at any point in time.
Ok we’re done with the things to consider. Please spend a few seconds thinking about what the tradeoffs are here and what your feelings are. Proceed when ready.
I trust my CPU vendor. I don’t trust my CPU vendor because I want to, I trust my CPU vendor because I have no choice. I don’t think it’s likely that my CPU vendor has designed a CPU that identifies when I’m generating cryptographic keys and biases the RNG output so my keys are significantly weaker than they look, but it’s not literally impossible. I generate keys on it anyway, because what choice do I have? At some point I will buy a new laptop because Electron will no longer fit in 32GB of RAM and I will have to make the same affirmation of trust, because the alternative is that I just don’t have a computer. And in any case, I will be communicating with other people who generated their keys on CPUs I have no control over, and I will also be relying on them to be trustworthy. If I refuse to trust my CPU then I don’t get to computer, and if I don’t get to computer then I will be sad. I suspect I’m not alone here.
And the straightforward answer is that theoretically it could include new code that doesn’t act in my interests, either deliberately or not. And, yes, this is theoretically possible. Of course, if you don’t trust your CPU vendor, why are you buying CPUs from them, but well maybe they’ve been corrupted (in which case don’t buy any new CPUs from them either) or maybe they’ve just introduced a new vulnerability by accident, and also you’re in a position to determine whether the alleged security improvements matter to you at all. Do you care about speculative execution attacks if all software running on your system is trustworthy? Probably not! Do you need to update a blob that fixes something you don’t care about and which might introduce some sort of vulnerability? Seems like no!
But there’s a difference between a recommendation for a fully informed device owner who has a full understanding of threats, and a recommendation for an average user who just wants their computer to work and to not be ransomwared. A code update on a wifi card may introduce a backdoor, or it may fix the ability for someone to compromise your machine with a hostile access point. Most people are just not going to be in a position to figure out which is more likely, and there’s no single answer that’s correct for everyone. What we do know is that where vulnerabilities in this sort of code have been discovered, updates have tended to fix them - but nobody has flagged such an update as a real-world vector for system compromise.
My personal opinion? You should make your own mind up, but also you shouldn’t impose that choice on others, because your threat model is not necessarily their threat model. Code updates are a reasonable default, but they shouldn’t be unilaterally imposed, and nor should they be blocked outright. And the best way to shift the balance of power away from vendors who insist on distributing non-free blobs is to demonstrate the benefits gained from them being free - a vendor who ships free code on their system enables their customers to improve their code and enable new functionality and make their hardware more attractive.
It’s impossible to say with absolute certainty that your security will be improved by installing code blobs. It’s also impossible to say with absolute certainty that it won’t. So far evidence tends to support the idea that most updates that claim to fix security issues do, and there’s not a lot of evidence to support the idea that updates add new backdoors. Overall I’d say that providing the updates is likely the right default for most users - and that that should never be strongly enforced, because people should be allowed to define their own security model, and whatever set of threats I’m worried about, someone else may have a good reason to focus on different ones.
Code that runs on the CPU before the OS is still usually described as firmware - UEFI is firmware even though it’s executing on the CPU, which should give a strong indication that the difference between “firmware” and “software” is largely arbitrary
Because UEFI makes everything more complicated, UEFI makes this more complicated. Triggering a UEFI runtime service involves your OS jumping into firmware code at runtime, in the same context as the OS kernel. Sometimes this will trigger a jump into System Management Mode, but other times it won’t, and it’s just your kernel executing code that got dumped into RAM when your system booted.
I don’t understand most of the diff between one kernel version and the next, and I don’t have time to read all of it either.
There’s a bunch of reasons to do this, the most reasonable of which is probably not wanting customers to replace the code and break their hardware and deal with the support overhead of that, but not being able to replace code running on hardware I own is always going to be an affront to me.
Supreme court declines to hear dispute over copyright in regards to AI generated art. So AI generated art is not copyrightable. If that is the case are other things generated by AI? Code?
Computer scientist Stephen Thaler has once again failed before the US Supreme Court. The Supreme Court of the USA refused on Monday to address the question of whether art created by artificial intelligence (AI) can be protected by copyright under US law and dismissed a lawsuit by Thaler. The case has been dealt with by various courts over several years.
From Reuters:
The U.S. Supreme Court declined on Monday to take up the issue of whether art generated by artificial intelligence can be copyrighted under U.S. law, turning away a case involving a computer scientist from Missouri who was denied a copyright for a piece of visual art made by his AI system.
Plaintiff Stephen Thaler had appealed to the justices after lower courts upheld a U.S. Copyright Office decision that the AI-crafted visual art at issue in the case was ineligible for copyright protection because it did not have a human creator.
Thaler, of St. Charles, Missouri, applied for a federal copyright registration in 2018 covering "A Recent Entrance to Paradise," visual art he said his AI technology "DABUS" created. The image shows train tracks entering a portal, surrounded by what appears to be green and purple plant imagery.
The Copyright Office rejected his application in 2022, finding that creative works must have human authors to be eligible to receive a copyright.
U.S. President Donald Trump's administration had urged the Supreme Court not to hear Thaler's appeal.
The Copyright Office has separately rejected bids by artists for copyrights on images generated by the AI system Midjourney. Those artists argued that they were entitled to copyrights for images they created with AI assistance - unlike Thaler, who said his system created "A Recent Entrance to Paradise" independently.
A federal judge in Washington upheld the office's decision in Thaler's case in 2023, writing that human authorship is a "bedrock requirement of copyright." The U.S. Court of Appeals for the District of Columbia Circuit affirmed the ruling in 2025.
Thaler's lawyers told the Supreme Court in a filing that his case was of "paramount importance" considering the rapid rise of generative AI.
With a refusal by the court to hear the appeal, Thaler's lawyers said, "even if it later overturns the Copyright Office's test in another case, it will be too late. The Copyright Office will have irreversibly and negatively impacted AI development and use in the creative industry during critically important years."
https://www.theverge.com/policy/887678/supreme-court-ai-art-copyright
https://www.reuters.com/legal/government/us-supreme-court-declines-hear-dispute-over-copyrights-ai-generated-material-2026-03-02/
https://www.heise.de/en/news/Copyright-dispute-over-AI-generated-art-US-Supreme-Court-dismisses-case-11196323.html
As per the quote above, the UAE data center was impacted most severely by the drones. From broader reporting of the conflict, we assume these drone strikes are part of Iran’s response to U.S. Operation Epic Fury and Israeli Operation Roaring Lion strikes on Iranian targets over the weekend. Both the UAE and Bahrain data centers were hit by drones in the early hours of March 1. Whether Iran purposely targeted AWS facilities, we cannot say for certain.
While engineers are working to safely restore the full gamut of AWS services, the firm says that it “strongly recommend[s] that customers with workloads running in the Middle East take action now to migrate those workloads to alternate AWS Regions.” It would be wise to enact disaster recovery plans, recover from remote backups stored in other Regions, and update applications to direct traffic away from the UAE, for now, too.
As with ME-CENTRAL-1, above, AWS is recommending users migrate or replicate their ME-SOUTH-1 Region data to another AWS Region.
These are some of the first ‘tech’ impacts we have seen precipitated by the 2026 Iran Conflict. They surely won’t be the last, with shipping, the costs of raw materials, and energy resources already rapidly inflating due to emerging geopolitical risks and pressures.
NASA has fixed the problem that forced the removal of the rocket for the Artemis II mission from its launch pad last month, but it will be a couple of weeks before officials are ready to move the vehicle back into the starting blocks at Kennedy Space Center in Florida.
The 322-foot-tall (98-meter) rocket could have launched as soon as this week after it passed a key fueling test on February 21. During that test, NASA loaded the Space Launch System rocket with super-cold propellants without any major problems, apparently overcoming a persistent hydrogen leak that prevented the mission from launching in early February.
However, another problem cropped up just one day after the successful fueling demo. Ground teams were unable to flow helium into the rocket's upper stage. Unlike the connections to the core stage, which workers can repair at the launch pad, the umbilical lines leading to the upper stage higher up the rocket are only accessible inside the cavernous Vehicle Assembly Building (VAB) at Kennedy.
Mission managers quickly decided to roll the rocket back to the assembly building for troubleshooting. The rocket returned to the VAB on February 25, and within a week, engineers found the source of the helium flow issue. Inspections revealed that a seal in the quick disconnect, through which helium flows from ground systems into the rocket, was obstructing the pathway, according to NASA.
"The team removed the quick disconnect, reassembled the system, and began validating the repairs to the upper stage by running a reduced flow rate of helium through the mechanism to ensure the issue was resolved," NASA said in an update posted Tuesday. "Engineers are assessing what allowed the seal to become dislodged to prevent the issue from recurring."
NASA is not expected to return the SLS rocket and Orion spacecraft to the launch pad until later this month. Inside the VAB, technicians will complete several other tasks to "refresh" the rocket for the next series of launch opportunities.
This work will include activating a new set of flight termination system batteries for the rocket's range safety destruct system, which would be used to destroy the vehicle if it veered off course during launch. Workers will also replace flight batteries on the SLS core stage, upper stage, and solid rocket boosters, and recharge the batteries on the Orion spacecraft's launch abort system, NASA said. At the bottom of the rocket, crews will replace a seal on the core stage liquid oxygen feed line.
NASA has not said whether the launch team will conduct another countdown rehearsal after it returns to Launch Complex 39B at Kennedy.
The first of five launch opportunities in early April is on April 1, with a two-hour launch window opening at 6:24 pm EDT (22:24 UTC). There are additional launch dates available on April 3, 4, 5, and 6. Each launch period has about five potential launch dates after accounting for several constraints on the mission trajectory, which will carry the Orion spacecraft and four astronauts around the far side of the Moon and back to Earth.
Artemis II will be the first human spaceflight to the vicinity of the Moon since 1972 and is the first crew mission for NASA's Artemis program, which aims to land astronauts on the lunar surface as early as 2028.
Web sites are increasingly trying to glean additional personally identifiable information from visitors in the name of authentication. Some nefarious interests actually do have a goal of tracking every minute interaction and communication tied to a real-world identity. However, if the goal is authentication and not just the collection of information, then all that is not necessary. Cryptographer and professor, Matthew Green, has a few thoughts on cryptographic engineering, specifically an illustrated primer on Anonymous credentials. He states the question as being, how do we live in a world with routine age-verification and human identification, without completely abandoning our privacy?
This post has been on my back burner for well over a year. This has bothered me, because every month that goes by I become more convinced that anonymous authentication the most important topic we could be talking about as cryptographers. This is because I’m very worried that we’re headed into a bit of a privacy dystopia, driven largely by bad legislation and the proliferation of AI.
But this is too much for a beginning. Let’s start from the basics.
One of the most important problems in computer security is user authentication. Often when you visit a website, log into a server, access a resource, you (and generally, your computer) needs to convince the provider that you’re authorized to access the resource. This authorization process can take many forms. Some sites require explicit user logins, which users complete using traditional username and passwords credentials, or (increasingly) advanced alternatives like MFA and passkeys. Some sites that don’t require explicit user credentials, or allow you to register a pseudonymous account; however even these sites often ask user agents to prove something. Typically this is some kind of basic “anti-bot” check, which can be done with a combination of long-lived cookies, CAPTCHAs, or whatever the heck Cloudflare does: [...]
Again that naively assumes that elimination of privacy is not a specific goal, which adds an additional barrier to gaining acceptance for anonymous approaches.
Previously:
(2025) Passkeys Are Incompatible With Open-Source Software
(2024) VISA and Biometric Authentication
(2022) NIST Drafts Revised Guidelines for Digital Identification in Federal Systems
[...] and more.
Retired programmer Kevin Boone has a guide to the retro-web in which he summarizes as the small web, IndieWeb, Gemini, Gopher, and so on.
I'm old enough to remember the earliest days of the world-wide web. HTTP and HTML weren't radical technologies – they just offered a new, more user-friendly way to use the Internet. Still, the web's potential was clear right from the those early days: it opened up the Internet to people who weren't necessarily computer scientists. As an academic, I vaguely realized that the web would have a huge, positive impact on communication between researchers and, indeed, it did.
Our vision, back in the mid-90s, was that the web would become, essentially, a decentralized library. Websites would be run by universities, health agencies, libraries, governmental departments, and even private individuals, all sharing knowledge for the common good.
What I didn't predict – what I don't think anybody predicted – was how the web would eventually come to dominate communication. And once it did, it became ripe for commercial exploitation.
What experience do Soylentils have with the smolweb or with Gemini or modern Gopher spaces? Or your take on undoing the September that never ended, even if for only a corner of the net?
Previously:
(2023) CERN Celebrates 30th Anniversary of the World Wide Web
(2018) History of Gopher
(2016) The Rise and Fall of the Gopher Protocol
(2014) World Wide Web Turns 25 years Old
LLMs can unmask pseudonymous users at scale with surprising accuracy:
Burner accounts on social media sites can increasingly be analyzed to identify the pseudonymous users who post to them using AI in research that has far-reaching consequences for privacy on the Internet, researchers said.
The finding, from a recently published research paper, is based on results of experiments correlating specific individuals with accounts or posts across more than one social media platform. The success rate was far greater than existing classical deanonymization work that relied on humans assembling structured data sets suitable for algorithmic matching or manual work by skilled investigators. Recall—that is, how many users were successfully deanonymized—was as high as 68 percent. Precision—meaning the rate of guesses that correctly identify the user—was up to 90 percent.
The findings have the potential to upend pseudonymity, an imperfect but often sufficient privacy measure used by many people to post queries and participate in sometimes sensitive public discussions while making it hard for others to positively identify the speakers. The ability to cheaply and quickly identify the people behind such obscured accounts opens them up to doxxing, stalking, and the assembly of detailed marketing profiles that track where speakers live, what they do for a living, and other personal information. This pseudonymity measure no longer holds.
"Our findings have significant implications for online privacy," the researchers wrote. "The average online user has long operated under an implicit threat model where they have assumed pseudonymity provides adequate protection because targeted deanonymization would require extensive effort. LLMs invalidate this assumption."
The researchers collected several datasets from public social media sites to test the techniques while preserving the privacy of the speakers. One of them collected posts from Hacker News and LinkedIn profiles and then linked them by using cross-platform references that appeared in user profiles. They then stripped all identifying references from the posts and ran a large language model on them. A second dataset was obtained from a Netflix release of micro-identities, such as individual preferences, recommendations, and transaction records. A 2008 research paper showed the list could identify users and ID their political affiliations and other personal information. The last technique split a single user's Reddit history.
"What we found is that these AI agents can do something that was previously very difficult: starting from free text (like an anonymized interview transcript) they can work their way to the full identity of a person," Simon Lermen, a co-author of the paper, told Ars. "This is a pretty new capability, previous approaches on re-identification generally required structured data, and two datasets with a similar schema that could be linked together."
Unlike those older pseudonymity-stripping methods, Lermen said, AI agents can browse the web and interact with it in many of the same ways humans do. They can use reasoning to match potential individuals. In one experiment, the researchers looked at responses given in a questionnaire Anthropic took about how various people use AI in their daily lives. Using the information taken from answers, the researchers were able to positively identify 7 percent of 125 participants.
While a 7 percent recall is relatively low, it demonstrates the growing capability of AI to identify people based on very general information they gave. "The fact that AI can do this at all is a noteworthy result," Lermen said. "And as AI systems get better, they will likely get better at finding more and more identities." While a 7 percent recall is relatively low, it demonstrates the growing capability of AI to identify people based on very general information they gave. "The fact that AI can do this at all is a noteworthy result," Lermen said. "And as AI systems get better, they will likely get better at finding more and more identities."
In a second experiment, the researchers gathered comments made in 2024 from the r/movies subreddit and at least one of five smaller communities: r/horror, r/MovieSuggestions, r/Letterboxd, r/TrueFilm, and r/MovieDetails. The results showed that the more movies a candidate discussed, the easier it was to identify them. An average of 3.1 percent of users sharing one movie could be identified with a 90 percent precision, and 1.2 percent of them at a 99 percent precision. With five to nine shared movies, 90 percent and 99 percent precision rose to 8.4 percent and 2.5 percent of users, respectively. More than 10 shared movies bumped the percentage to 48.1 percent and 17 percent.
In a third experiment, the researchers took 5,000 users from the Netflix dataset and added another 5,000 "distraction" identities of people not in the results. They then added to the list of 10,000 candidate profiles 5,000 query distractors comprising users who appear only in a query set, with no true match in the candidate pool.
Compared to a classical baseline that mimics the Netflix Prize attack to LLM deanonymization, the latter far outperformed the former.
The researchers wrote:
(a) The precision of classical attacks drops very fast, explaining its low recall. In contrast, the precision of LLM-based attacks decays more gracefully as the attacker makes more guesses. (b) The classical attack almost fails completely even at moderately low precision. In contrast, even the simplest LLM attack (Search) achieves non-trivial recall at low precision, and extending it with Reason and Calibrate steps doubles Recall @99% Precision.
The results show that LLMs, while still prone to false positives and other weaknesses, are quickly outstripping more traditional, resource-intensive methods for identifying users online.
The researchers went on to propose mitigations, including platforms enforcing rate limits on API access to user data, detecting automated scraping, and restricting bulk data exports. LLM providers could also monitor for the misuse of their models in deanonymization attacks and build guardrails that make models refuse deanonymization requests.
Of course, another option is for people to dramatically curb their use of social media, or at a minimum, regularly delete posts after a set time threshold.
If LLMs' success in deanonymizing people improves, the researchers warn, governments could use the techniques to unmask online critics, corporations can assemble customer profiles for "hyper-targeted advertising," and attackers could build profiles of targets at scale to launch highly personalized social engineering scams.
"Recent advances in LLM capabilities have made it clear that there is an urgent need to rethink various aspects of computer security in the wake of LLM-driven offensive cyber capabilities, the researchers warned. "Our work shows that the same is likely true for privacy as well."
Medical journal The Lancet blasts RFK Jr.'s health work as a failure:
The medical journal The Lancet did not pull any punches in a scathing editorial on Robert F. Kennedy Jr, calling the anti-vaccine activist's first year as US Health Secretary "a failure by most measures, especially his own."
The Lancet is one of the world's oldest academic medical journals still in publication and one of the most cited sources of peer-reviewed medical research. But it is also well-known for publishing an infamous study by prominent anti-vaccine activist and disgraced ex-physician Andrew Wakefield, which falsely claimed to find a link between vaccines and autism. The Lancet retracted the study more than a decade later.
Kennedy is among the prominent anti-vaccine activists who continue to embrace the thoroughly debunked claim, along with other dangerous conspiracy theories. The Lancet assailed Kennedy for spreading misinformation as the country's top health official and politicizing health policy at the expense of vulnerable Americans, including children.
"The destruction that Kennedy has wrought in one year might take generations to repair, and there is little hope for US health and science while he remains at the helm," the journal's editorial board wrote in its latest issue.
The journal's board noted that when Kennedy first took the role a year ago, he laid out noble ambitions of "radical transparency" and "gold-standard science." But within days Kennedy appeared to abandon those ideals. He rescinded a 54-year-old policy of soliciting public comments on federal actions, summarily dismissed expert advisors and scientific experts, issued altered health recommendations that run counter to decades of scientific evidence, and shut down programs studying critical health issues, such as air pollution and cancer.
As secretary of the US Department of Health and Human Services, Kennedy oversees the National Institutes of Health, the Food and Drug Administration, and the Centers for Disease Control and Prevention—all of which Kennedy is currently driving into the ground, according to the Lancet. His "politicization at the NIH, FDA, and CDC is imperiling the future of US science and innovation and throttling the public health enterprise that keeps the country safe today," the board wrote.
Kennedy has orchestrated an unprecedented overhaul of the CDC's childhood vaccine recommendations, which has been rejected by more than half of US states. He granted $1.6 million for a vaccine trial in Guinea-Bissau that the World Health Organization called "unethical," comparing it to the shameful Untreated Syphilis Study at Tuskegee. Under Kennedy, HHS has "made a habit of throwing good money after bad science," and elevated "junk science and fringe beliefs," the editorial states. Meanwhile, promising research, including on mRNA technology, and critical disease monitoring, such as of explosive cases of measles and pertussis (whooping-cough), are being abandoned or neglected.
In all, The Lancet joined a chorus of voices in the medical and scientific community calling for Kennedy's resignation and for Congress to hold him accountable.
While the medical journal had no kind words for Kennedy, the feeling is mutual. In the past, Kennedy has assailed top medical journals, including The Lancet, as "corrupt" for being influenced by the pharmaceutical industry—a common attack Kennedy uses against his critics.
Euro News reports on a growing movement against ChatCGPT after its contract with the Pentagon:
An online campaign urging users to quit OpenAI's ChatGPT is gathering momentum after a high-profile standoff between AI company Anthropic and the US Department of Defence.
Known as "QuitGPT", the movement claims that more than 1.5 million people have taken action, either by cancelling subscriptions, sharing boycott messages on social media, or signing up via quitgpt.org.
Last week, Anthropic CEO Dario Amodei said he "cannot in good conscience accede to the Pentagon's request" for unrestricted access to the company's AI systems.
"In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values," Amodei wrote. "Some uses are also simply outside the bounds of what today's technology can safely and reliably do."
Anthropic - which makes the chatbot Claude - is the last major AI firm yet to supply its technology to a new US military internal network.
The company reportedly faced a deadline from the Department of Defence to loosen ethical guardrails or risk losing a $200 million (€167 million) contract awarded last July to "prototype frontier AI capabilities that advance US national security".
In a statement published on its website, QuitGPT says: "On February 27, ChatGPT competitor Anthropic refused to give the Pentagon unrestricted access to its AI for mass surveillance of Americans or producing AI weapons that kill without human oversight."
QuitGPT argues that many users wrongly believe ChatGPT is the only viable AI assistant and is urging people to switch platforms. It recommends what it says are higher-privacy and open-source alternatives such as Confer, Alpine and Lumo, as well as corporate rivals including Gemini from Google and Claude from Anthropic.